[Feat] Enable mix placement for DSR1 #4470
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request introduces a 'mix-placement' feature, which appears to allow shared experts to be treated as regular experts, affecting model configuration, expert map management, and MoE operations. A new patch for DeepSeek V2/V3 models is added to support this feature. The changes are extensive and touch several parts of the codebase.
My review has identified a few critical issues that need to be addressed. There's a potential for a runtime error in vllm_ascend/eplb/adaptor/vllm_adaptor.py due to an incorrect in-place tensor copy with mismatched shapes. Another critical bug was found in vllm_ascend/ops/fused_moe/moe_mlp.py where the logic for calculating group sizes is flawed and can lead to index errors or incorrect results. Additionally, a hardcoded 'magic number' in vllm_ascend/ops/fused_moe/experts_selector.py reduces code clarity and should be replaced with a named constant.
Other changes, such as refactoring and bug fixes in vllm_ascend/ops/fused_moe/fused_moe.py, seem correct and improve the code.
```python
    value=-1
)
self.expert_map_per_layer[layer_id].copy_(updated_expert_map_padded)
self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)
```
The in-place copy `self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)` will raise a `RuntimeError` if `updated_expert_map` has a different shape than `self.expert_map_per_layer_cpu[layer_id]`. The logic that pads `updated_expert_map` for the device tensor `self.expert_map_per_layer` suggests that shape mismatches are expected. The CPU-side map should be handled in a way that accommodates shape changes to avoid crashes. Reassigning the tensor, as was done previously, is a safer approach.
Suggested change:
```diff
-self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)
+self.expert_map_per_layer_cpu[layer_id] = updated_expert_map.clone()
```
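For context, a minimal repro of the failure mode (the tensor shapes here are invented for illustration):

```python
import torch

cpu_map = torch.full((8,), -1, dtype=torch.int32)   # existing CPU-side expert map
updated = torch.full((10,), -1, dtype=torch.int32)  # remapped layout with a new length

try:
    # copy_ requires the source to be broadcastable to the destination,
    # so a length change raises a RuntimeError.
    cpu_map.copy_(updated)
except RuntimeError as exc:
    print(f"in-place copy failed: {exc}")

# Rebinding, as the suggestion above does, sidesteps the shape constraint:
cpu_map = updated.clone()
```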
```python
group_diff = torch.diff(group_list)
new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
```
The calculation of `new_group` from `group_list` (which appears to be a cumulative sum of group sizes) is incorrect and can lead to runtime errors or incorrect behavior.

- If `group_list` contains only one element, `torch.diff(group_list)` will be empty, and `group_diff[0]` will raise an `IndexError`.
- If `group_list` is `[g1, g1+g2, ...]`, `group_diff` will be `[g2, g3, ...]`. The current logic `torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)` would produce `[g2, g2, g3, ...]`, which is incorrect, as the first group size should be `g1`.

The correct way to get group sizes from a cumulative-sum tensor is to use `torch.diff` with the first element of `group_list` prepended.
Suggested change:
```diff
-group_diff = torch.diff(group_list)
-new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
+new_group = torch.cat((group_list[0:1], torch.diff(group_list)))
```
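A self-contained check of both versions (the group sizes are made up for the example):

```python
import torch

# Suppose the true group sizes are [3, 2, 4]; the kernel receives the cumsum.
group_list = torch.tensor([3, 5, 9])

# Buggy: drops g1 and duplicates g2 -> [2, 2, 4]
group_diff = torch.diff(group_list)
buggy = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)

# Fixed: prepend the first cumulative value -> [3, 2, 4]
fixed = torch.cat((group_list[0:1], torch.diff(group_list)))

# Equivalent one-liner using torch.diff's prepend argument
also_fixed = torch.diff(group_list, prepend=group_list.new_zeros(1))

assert torch.equal(fixed, torch.tensor([3, 2, 4]))
assert torch.equal(fixed, also_fixed)
assert not torch.equal(buggy, fixed)
```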
```python
pad_shared_expert_weights = torch.full((topk_weights.shape[0], 1),
                                       0.4,
                                       dtype=topk_weights.dtype,
                                       device=topk_weights.device)
```
The value `0.4` is used as a hardcoded weight for the padded shared expert. This "magic number" makes the code harder to understand and maintain. It is unclear why this specific value was chosen and what its implications are, especially concerning weight normalization, which typically expects weights to sum to 1. This value should be defined as a named constant with an explanatory comment, or passed as a parameter to the function, to improve clarity and maintainability.
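A sketch of the suggested refactor; the constant name, the placeholder tensor, and the explanatory comment are illustrative, not taken from the PR:

```python
import torch

# Routing weight assigned to the padded shared expert. NOTE: empirical;
# document its interaction with top-k weight normalization (the routed
# weights otherwise sum to 1) before treating it as final.
PAD_SHARED_EXPERT_WEIGHT = 0.4

topk_weights = torch.rand(16, 8)  # placeholder stand-in for the real tensor

pad_shared_expert_weights = torch.full(
    (topk_weights.shape[0], 1),
    PAD_SHARED_EXPERT_WEIGHT,
    dtype=topk_weights.dtype,
    device=topk_weights.device,
)
```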
Force-pushed from 7a21148 to a6d11e2.
This pull request has conflicts, please resolve those before we can evaluate the pull request.
```python
# self.global_redundant_expert_num = (
#     self.expert_load_balancer.get_global_redundant_expert_num())
```
Unused code should be removed rather than commented out.
Please sign your commits with a `Signed-off-by` line.
Force-pushed from e11e95a to 2a1687b.
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Signed-off-by: Che Ruan <[email protected]>
Force-pushed from 7d6c8d9 to d6664ae.
Signed-off-by: Mercykid-bash <[email protected]>
```python
self.global_redundant_expert_num = (
    self.expert_load_balancer.get_global_redundant_expert_num())
# self.global_redundant_expert_num = (
#     self.expert_load_balancer.get_global_redundant_expert_num())
```
Commented-out code should be deleted.
```python
shared_out = self._shared_experts(hidden_states)
if self._shared_experts is None:
    shared_out = None
else:
```
It is recommended to use ternary expressions to eliminate redundant branches and improve code conciseness.
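A sketch of the ternary form (assuming `_shared_experts` is either `None` or callable, as the snippet above implies):

```python
shared_out = (self._shared_experts(hidden_states)
              if self._shared_experts is not None else None)
```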
```python
# See DeepseekV2DecoderLayer for more details.
if hidden_states.dtype != torch.float16:
    final_hidden_states *= self.routed_scaling_factor
elif self.shared_experts is not None:
```
These two branches appear to have no direct correlation in their conditions or internal execution, which can easily lead to maintenance confusion.
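To illustrate the hazard, a commented sketch of the same structure (the trailing branch body is elided here, as in the snippet above):

```python
# NOTE: the dtype check and the shared-experts check are logically
# unrelated; the elif only guarantees that at most one branch runs.
# A reader must know how the two conditions interact to follow this.
if hidden_states.dtype != torch.float16:
    final_hidden_states *= self.routed_scaling_factor
elif self.shared_experts is not None:
    ...  # shared-experts handling continues in the PR diff
```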
```python
        final_hidden_states, 0
    )
    final_hidden_states = final_hidden_states[:num_tokens]
elif self.tp_size > 1:
```
Here as well, there are two branches that seem to have no direct connection in terms of conditions or internal execution, which can easily lead to maintenance confusion.
```python
if name_mapped not in params_dict.keys():
    continue
param = params_dict[name_mapped]
weight_loader = typing.cast(Callable[..., bool], param.weight_loader)
```
`typing.cast` is a no-op at runtime, so this forced type conversion does not verify that `weight_loader` actually supports the call.
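A sketch of a runtime guard that would surface the problem early (the use of `getattr` and the error message are illustrative, not from the PR):

```python
weight_loader = getattr(param, "weight_loader", None)
if not callable(weight_loader):
    raise TypeError(
        f"parameter {name_mapped!r} has no callable weight_loader")
```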
```python
    default_weight_loader)
weight_loader(param, loaded_weight)
if not is_fuse_shared_experts_layer:
    loaded_params.add(name)
```
`name` is not validated before being added, so `None` or otherwise invalid names could contaminate the `loaded_params` result set.
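A minimal guard along the lines the comment suggests (the truthiness check is one possible validation, not the PR's own):

```python
if not is_fuse_shared_experts_layer and name:
    loaded_params.add(name)
```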
This pull request has conflicts, please resolve those before we can evaluate the pull request.